SteerBench: a benchmark suite for evaluating steering behaviors

Authors

  • Shawn Singh
  • Mubbasir Kapadia
  • Petros Faloutsos
  • Glenn Reinman
Abstract

Steering is a challenging task, required by nearly all agents in virtual worlds. There is a large and growing number of approaches for steering, and it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, metrics of evaluation, and a scoring method that can be used to compare different steering algorithms. Our framework can be easily customized by a user to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation. Copyright © 2009 John Wiley & Sons, Ltd.
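The abstract describes combining per-test-case metrics into a single score for comparing steering algorithms. As a purely illustrative sketch (the metric names, weights, and weighted-average formulation below are assumptions, not the paper's actual scoring method), such a composite score might look like:

```python
# Hypothetical sketch of a benchmark-style composite score.
# Metric names and weights are illustrative assumptions only,
# not SteerBench's actual formulation.

def composite_score(metrics: dict, weights: dict) -> float:
    """Combine per-metric penalties into one weighted score
    (lower is better), normalized by the total weight."""
    total_weight = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in weights) / total_weight

# Example: penalties collected from one test case.
metrics = {"collisions": 2.0, "time_to_goal": 14.5, "effort": 7.3}
weights = {"collisions": 0.5, "time_to_goal": 0.3, "effort": 0.2}
score = composite_score(metrics, weights)
```

A user-adjustable weight vector is one simple way a framework like this could be "customized to evaluate specific behaviors," as the abstract mentions.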


Similar Articles

Watch Out! A Framework for Evaluating Steering Behaviors

Interactive virtual worlds feature dynamic characters that must navigate through a variety of landscapes populated with various obstacles and other agents. The process of navigating to a desired location within a dynamic environment is the problem of steering. While there are many approaches to steering, to our knowledge there is no standard way of evaluating and comparing the quality of such s...


A Benchmark Suite for Evaluating the Performance of the WebODE Ontology Engineering Platform

Ontology tools play a key role in the development and maintenance of the Semantic Web. Hence, we need on the one hand to objectively evaluate these tools, in order to analyse whether they can deal with current and future requirements, and on the other hand to develop benchmark suites for performing these evaluations. In this paper, we describe the method we have followed to design and implement a be...


An Open Benchmark Suite for Evaluating Computer Architecture on Bioinformatics and Life Science Applications

In this paper, we propose BIOPERF, a definitive benchmark suite of representative applications from the biology and life sciences community, where the codes are carefully selected to span a breadth of algorithms and performance characteristics. The BIOPERF suite is available from www.bioperf.org and includes benchmark source code, input datasets of various sizes, and information for compiling ...


Defining a Benchmark Suite for Evaluating the Import of OWLLite Ontologies

Semantic Web tools should be able to correctly interchange ontologies and, therefore, to interoperate. This interchange is not always a straightforward task if tools have different underlying knowledge representation paradigms. This paper describes the process followed to define a benchmark suite for evaluating the OWL import capabilities of ontology development tools in a benchmarking activity...


Benchmarking Modern Multiprocessors

Benchmarking has become one of the most important methods for quantitative performance evaluation of processor and computer system designs. Benchmarking of modern multiprocessors such as chip multiprocessors is challenging because of their application domain, scalability and parallelism requirements. In my thesis, I have developed a methodology to design effective benchmark suites and demonstra...




Journal:
  • Journal of Visualization and Computer Animation

Volume: 20
Issue: –
Pages: –
Publication date: 2009